doomsday scenario
I Launched the AI Safety Clock. Here's What It Tells Us About Existential Risks
If uncontrolled artificial general intelligence--or "God-like" AI--is looming on the horizon, we are now about halfway there. Every day, the clock ticks closer to a potential doomsday scenario. That's why I introduced the AI Safety Clock last month. My goal is simple: I want to make clear that the dangers of uncontrolled AGI are real and present. The Clock's current reading--29 minutes to midnight--is a measure of just how close we are to the critical tipping point where uncontrolled AGI could bring about existential risks.
- Asia > Russia (0.30)
- Europe > Ukraine (0.15)
- North America > United States > California (0.06)
- Europe > Russia (0.05)
- Energy (0.72)
- Government > Regional Government > Europe Government (0.49)
AI doomsday warnings a distraction from the danger it already poses, warns expert
Focusing on doomsday scenarios in artificial intelligence is a distraction that plays down immediate risks such as the large-scale generation of misinformation, according to a senior industry figure attending this week's AI safety summit. Aidan Gomez, co-author of a research paper that helped create the technology behind chatbots, said long-term risks such as existential threats to humanity from AI should be "studied and pursued", but that they could divert politicians from dealing with immediate potential harms. "I think in terms of existential risk and public policy, it isn't a productive conversation to be had," he said. "As far as public policy and where we should have the public-sector focus – or trying to mitigate the risk to the civilian population – I think it forms a distraction, away from risks that are much more tangible and immediate." Gomez is attending the two-day summit, which starts on Wednesday, as chief executive of Cohere, a North American company that makes AI tools for businesses including chatbots.
Why Elon Musk is wrong about pausing AI development - CapX
Panic about new technologies is nothing new, and artificial intelligence is no exception. This week more than 1,800 people have signed an open letter calling for at least a six-month pause on training AI systems that are 'more powerful than GPT-4' – the latest chatbot released by OpenAI. The signatories – who include the likes of Elon Musk, Andrew Yang and Steve Wozniak – want governments to impose a moratorium if AI labs don't stop their research voluntarily. Meanwhile here in the UK, the Government recently released its own AI regulation strategy. The letter cites a number of concerns about AI: 1) disseminating dis/misinformation, 2) ushering in a period of widespread unemployment, and 3) the creation of nefarious robot overlords.
- North America > United States (0.29)
- Europe > United Kingdom (0.25)
- Government (1.00)
- Media > News (0.51)
- Banking & Finance > Economy (0.36)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.99)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.90)
The Progress Of AI - AI Summary
Look at this: college students are sharing (anonymously) that they've started using AI tools to generate essays that can bypass anti-plagiarism software and score an A. The widespread use of the tools could reshape education and force schools to figure out new writing prompts or entirely fresh ways of assessing student performance to avoid being duped by the technology. Most projections have the AI niche reaching over $420 billion in total market size by 2028, a compound annual growth rate of 39.4 percent. Google is in talks to invest at least $200 million into AI start-up Cohere Inc., according to people familiar with the matter; another sign of the escalating arms race among large technology companies in the sector. There are some harbored fears surrounding AI ranging from doomsday scenarios to simple ethics concerns, but the overall trend is clear, and investors seem to have confidence that humanity will make the necessary adjustments to coexist with this new tech.
A beginner's guide to the AI apocalypse: Killer robots
Welcome to the fifth article in TNW's guide to the AI apocalypse. In this series we examine some of the most popular doomsday scenarios prognosticated by modern AI experts. Previous articles in this series include: Misaligned Objectives, Artificial Stupidity, Wall-E Syndrome, and Humanity Joins the Hivemind. We've danced around the subject of killer robots in the previous four editions in this series, but it's time to look the machines in their beady red eyes and… speculate. First things first: the reason we haven't covered 'killer robots' in this series so far is that, even in today's age of AI-everything, it remains an incredibly unlikely doomsday scenario.
The dangers and benefits of Artificial Intelligence techsocialnetwork
The threat of Artificial Intelligence (AI) used to be nothing more than a science fiction doomsday scenario. Today, an AI threat is a very real possibility, and could be more disastrous than nuclear weapons or World War Three. Over the years, AI has been advancing at an alarming rate. In fact, it is precisely AI's exponential rate of improvement that makes it so dangerous. Should we decide to follow through with making AI sentient and giving it free will, it could, as many science fiction narratives have suggested, see humans as a problem and decide to do something about it.
Don't Let Artificial Intelligence Supercharge Bad Processes
When artificial intelligence is used to expedite certain legacy processes, it can act more like a Band-Aid than a cure. Scenarios describing the potential for artificial intelligence (AI) seem to gravitate toward hyperbole. In wonderful scenarios, AI enables nirvanas of instant optimal processes and prescient humanoids. In doomsday scenarios, algorithms go rogue and humans are superfluous, at best, and, at worst, subservient to the new silicon masters. However, both of these scenarios require a sophistication that, at least right now, seems far away.
The Simplistic Debate Over Artificial Intelligence
Man will only become better when you make him see what he is like. The disagreement this summer between two currently reigning U.S. tech titans has brought new visibility to the debate about the possible risks of Artificial Intelligence. Tesla and SpaceX CEO Elon Musk has been warning since 2014 about the doomsday potential of runaway AI. Along the way, Musk's views have been challenged by many, but the debate went mainstream when another billionaire celebrity CEO, Facebook's Mark Zuckerberg, took issue with what he believes to be Musk's alarmist view. This tiff of the titans is worth noting because of the high stakes and because their opposing views represent the most common schism over AI.
Google's AI Chief: 'Definitely Not Worried About AI Apocalypse'
Zuckerberg, who spent the summer sparring with Tesla's Elon Musk over the risks of ever-advancing artificial intelligence in our technology, got some support from Google's head of search and AI, John Giannandrea, who spoke recently about what he called the "huge amount of unwarranted hype around AI right now." Speaking at the TechCrunch Disrupt conference in San Francisco on Tuesday, Giannandrea echoed some of the Facebook co-founder's recent statements dismissing doomsday scenarios in which AI-empowered machines pose an inherent existential threat to their human creators. "This leap into, 'Somebody is going to produce a superhuman intelligence, and then there's going to be all these ethical issues' is unwarranted and borderline irresponsible," Giannandrea said at the conference. Google's AI chief added: "I'm definitely not worried about the AI apocalypse." Giannandrea went on to explain the importance of machine learning and artificial intelligence in revolutionizing the technology industry. Google uses AI to power features like Google Translate, the online tool that can instantly translate both spoken words and typed text, as well as products that help users search for new jobs online and provide users with ready-made replies to messages in Google's Gmail, among countless other applications.
- North America > United States > California > San Francisco County > San Francisco (0.26)
- North America > Canada > Quebec > Montreal (0.06)
emotions; the connection between AI and us – George Achillias – Medium
"It is not necessary to change." Since its beginnings in the 1950s, artificial intelligence has been a favourite subject of science fiction literature. Yet today, AI has entered the realm of fact: several studies underline that intelligent machines will change the way we work, the way we move and even how wars are fought. Innovators and scientists around the world believe that now is the time to ensure that AI cannot override humanity. And even if sufficient and strong arguments suggest that machines could one day be more intelligent than we are, many scientists are ready to accept that challenge. Every day we read an article or a story about AI and machine learning and how the two can shape our lives. Hardly a day goes by without reading how risky the practical application of AI could be for our society if we don't take the appropriate precautions. Two years ago, Elon Musk was clear on Twitter: "We need to be super careful with artificial intelligence."
- North America > United States > New York (0.04)
- Asia > Middle East > Israel (0.04)
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.68)
- Information Technology (0.68)